On the steplength selection in gradient methods for unconstrained optimization

Authors

  • Daniela di Serafino
  • Valeria Ruggiero
  • Gerardo Toraldo
  • Luca Zanni
Abstract

The seminal paper by Barzilai and Borwein [IMA J. Numer. Anal. 8 (1988)] has given rise to an extensive investigation aimed at developing effective gradient methods able to deal with large-scale optimization problems. Several steplength rules were first designed for unconstrained quadratic problems and then extended to general nonlinear problems; these rules share the common idea of attempting to capture, in an inexpensive way, some second-order information. Our aim is to investigate the relationship between the steplengths of some gradient methods and the spectrum of the Hessian of the objective function, in order to provide insight into the computational effectiveness of these methods. We start the analysis in the framework of strongly convex quadratic problems, where the role of the eigenvalues of the Hessian matrix in the behaviour of gradient methods is better understood. Then we move to general unconstrained problems, focusing on natural extensions of some steplength rules analysed in the previous case. Our study suggests that, in the quadratic case, the methods that tend to use groups of small steplengths followed by some large steplengths, attempting to approximate the inverses of some eigenvalues of the Hessian matrix, exhibit better numerical behaviour. The methods considered in the general case seem to preserve the behaviour of their quadratic counterparts, in the sense that they appear to follow somehow the spectrum of the Hessian of the objective function during their progress toward a stationary point.
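To make the idea concrete, here is a minimal sketch of a Barzilai-Borwein gradient method on a strongly convex quadratic, using the classical BB1 steplength (the inverse of a Rayleigh-like quotient, which lies within the spectrum of the Hessian). This is an illustrative example only; the paper surveys many steplength rules beyond this one, and the function and parameter names below are the author's own choices, not taken from the paper.

```python
import numpy as np

def bb_gradient(A, b, x0, max_iter=200, tol=1e-8):
    """Barzilai-Borwein (BB1) gradient method for the strongly convex
    quadratic f(x) = 0.5 * x'Ax - b'x, whose gradient is g(x) = Ax - b.
    Illustrative sketch, not the paper's implementation."""
    x = x0.astype(float)
    g = A @ x - b
    # Safe first steplength: 1/lambda_max(A), since BB needs a previous step.
    alpha = 1.0 / np.linalg.norm(A, 2)
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        x_new = x - alpha * g
        g_new = A @ x_new - b
        s, y = x_new - x, g_new - g
        # BB1 rule: alpha = s's / s'y; its inverse s'As / s's is a
        # Rayleigh quotient, so 1/alpha lies in [lambda_min, lambda_max].
        alpha = (s @ s) / (s @ y)
        x, g = x_new, g_new
    return x
```

On an ill-conditioned diagonal quadratic, the sequence of 1/alpha values produced by this rule tends to sweep across the eigenvalues of A, which is exactly the spectral behaviour the abstract refers to.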


Related articles

A New Hybrid Conjugate Gradient Method Based on Eigenvalue Analysis for Unconstrained Optimization Problems

In this paper, two extended three-term conjugate gradient methods based on the Liu-Storey (LS) conjugate gradient method are presented to solve unconstrained optimization problems. A remarkable property of the proposed methods is that the search direction always satisfies the sufficient descent condition independently of the line search method, based on eigenvalue analysis. The globa...


Acceleration of conjugate gradient algorithms for unconstrained optimization

Conjugate gradient methods are important for large-scale unconstrained optimization. This paper proposes an acceleration of these methods using a modification of the steplength. The idea is to modify, in a multiplicative manner, the steplength α_k computed by the Wolfe line search conditions, by means of a positive parameter η_k, in such a way as to improve the behavior of the classical conjugate gradien...


The modified BFGS method with new secant relation for unconstrained optimization problems

Using Taylor's series, we propose a modified secant relation to get a more accurate approximation of the second curvature of the objective function. Then, based on this modified secant relation, we present a new BFGS method for solving unconstrained optimization problems. The proposed method makes use of both gradient and function values, while the usual secant relation uses only gradient values. U...


A Note on the Descent Property Theorem for the Hybrid Conjugate Gradient Algorithm CCOMB Proposed by Andrei

In [1] (Hybrid Conjugate Gradient Algorithm for Unconstrained Optimization, J. Optim. Theory Appl. 141 (2009) 249-264), an efficient hybrid conjugate gradient algorithm, the CCOMB algorithm, is proposed for solving unconstrained optimization problems. However, the proof of Theorem 2.1 in [1] is incorrect due to an erroneous inequality which was used to establish the descent property for the s...


A New Descent Nonlinear Conjugate Gradient Method for Unconstrained Optimization

In this paper, a new nonlinear conjugate gradient method is proposed for large-scale unconstrained optimization. The sufficient descent property holds without any line searches. Using a steplength technique that ensures the Zoutendijk condition holds, the method is proved to be globally convergent. Finally, we improve it and carry out further analysis.



Journal:
  • Applied Mathematics and Computation

Volume 318, Issue –

Pages –

Publication date: 2018